

Section: New Results

Computational Statistical Physics

Participants : Grégoire Ferré, Frédéric Legoll, Tony Lelièvre, Pierre Monmarché, Boris Nectoux, Mouad Ramil, Julien Roussel, Laura Silva Lopes, Gabriel Stoltz, Pierre Terrier.

The objective of computational statistical physics is to compute macroscopic properties of materials starting from their microscopic description, using concepts from statistical physics (thermodynamic ensembles and molecular dynamics). The contributions of the team can be divided into four main topics: (i) the computation of thermodynamic quantities by sampling the canonical measure; (ii) the sampling of the stationary measure of non-equilibrium systems (namely non-reversible dynamics); (iii) the efficient computation of dynamical properties, which requires sampling metastable trajectories; (iv) coarse-graining techniques to reduce the computational cost of molecular dynamics simulations and gain some insight into the models.

Sampling of the canonical measure, free energy calculations and adaptive biasing techniques

The work by T. Lelièvre and G. Stoltz, together with G. Fort (Toulouse) and B. Jourdain (CERMICS), on a dynamics similar to well-tempered metadynamics has been published [19]. This dynamics can be seen as an extension of the so-called self-healing umbrella sampling method, with only a partial biasing of the dynamics. In particular, the authors proposed a version which leads to much shorter exit times from metastable states (accelerated well-tempered metadynamics).
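As a rough illustration of this family of methods, the sketch below runs a toy well-tempered metadynamics on a one-dimensional double well; the potential, parameters and grid-based bias are illustrative choices only, not the scheme analyzed in [19]:

```python
import numpy as np

rng = np.random.default_rng(7)
beta, dT = 5.0, 4.0            # inverse temperature, well-tempered parameter
dt, h, w = 1e-3, 0.1, 0.05     # time step, hill width, base hill height
nsteps, stride = 100_000, 200  # a hill is deposited every `stride` steps

grid = np.linspace(-2.0, 2.0, 201)
bias = np.zeros_like(grid)
dx = grid[1] - grid[0]

def grad_V(x):                 # toy double-well potential V(x) = (x^2 - 1)^2
    return 4.0 * x * (x * x - 1.0)

def grad_bias(x):              # finite-difference gradient of the deposited bias
    i = int(np.clip(np.searchsorted(grid, x), 1, len(grid) - 1))
    return (bias[i] - bias[i - 1]) / dx

x = -1.0
noise = rng.standard_normal(nsteps)
for n in range(nsteps):
    x += -(grad_V(x) + grad_bias(x)) * dt + np.sqrt(2.0 * dt / beta) * noise[n]
    x = float(np.clip(x, -2.0, 2.0))
    if n % stride == 0:
        i = int(np.argmin(np.abs(grid - x)))
        # well-tempered rule: hill height decays where bias has accumulated
        height = w * np.exp(-bias[i] / dT)
        bias += height * np.exp(-((grid - x) ** 2) / (2.0 * h * h))
```

The deposited hills progressively flatten the visited well, which is what shortens the exit times from metastable states discussed above.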

In [29], T. Lelièvre, in collaboration with C. Chipot (Nancy), T. Zhao, H. Fu, X. Shao and W. Cai (Nankai University), proposed a new version of the adaptive biasing force (ABF) technique, which is well suited for the computation of free energy landscapes in high dimension. In addition, V. Ehrlacher, T. Lelièvre and P. Monmarché are currently developing a tensorized version of the ABF algorithm. As in the usual ABF algorithm, the objective is still to compute in an adaptive way (through MCMC computations) the free energy A of a molecular system, which is a function of given reaction coordinates. Keeping in memory an approximation of A requires a numerical grid of size m^d, where d is the number of reaction coordinates and m is the number of points in a one-dimensional grid. This prevents d from being larger than 4. To allow for a larger number of reaction coordinates, A is approximated as a sum of tensor products of functions of a single variable, which only requires a memory of size N m d, where N is the number of tensor products used in the approximation.
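The memory argument can be made concrete with a small sketch; the sizes and random factors below are purely illustrative and do not reflect the team's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, N = 6, 32, 4   # reaction coordinates, 1-d grid points, tensor terms

# A full grid would need m**d values; the tensor format stores only N*d*m.
factors = rng.standard_normal((N, d, m))

def eval_approx(idx):
    """Evaluate sum_k prod_i f_{k,i}(x_i) at the grid multi-index idx."""
    return sum(np.prod(factors[k, np.arange(d), idx]) for k in range(N))

full_grid_size = m ** d          # about 1.07e9 values for these sizes
tensor_size = N * d * m          # 768 values
value = eval_approx([0] * d)
```

Evaluating the approximation at one grid point costs O(N d) operations, while the storage drops from m^d to N m d values.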

In [53], G. Stoltz and E. Vanden-Eijnden (Courant Institute) have studied the properties of the temperature accelerated molecular dynamics (TAMD) method. This dynamics provides a way to compute the free energy. It consists in introducing an extended variable into the system, coupled to the chosen reaction coordinate and evolving at a higher temperature, in order to alleviate metastable behavior while accelerating the dynamics of the system at the lower physical temperature. G. Stoltz and E. Vanden-Eijnden proved in particular that the law of the dynamics converges exponentially fast to the steady state, with a rate dictated by the Poincaré inequality of the effective dynamics on the free energy surface at the higher temperature. This work was performed while E. Vanden-Eijnden was spending two months as an Inria invited professor in the project-team.
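A minimal sketch of the extended-variable construction, assuming a hypothetical one-dimensional double well, overdamped dynamics for both variables, and illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(1)
beta, beta_bar = 5.0, 1.0        # physical and (higher) artificial temperatures
kappa, gamma_x, gamma_z = 50.0, 1.0, 10.0
dt, nsteps = 1e-3, 20_000

def grad_V(x):                   # toy double-well potential V(x) = (x^2 - 1)^2
    return 4.0 * x * (x * x - 1.0)

x, z = 1.0, 1.0
for _ in range(nsteps):
    # fast physical variable x at temperature 1/beta, tethered to z by a spring
    fx = -grad_V(x) - kappa * (x - z)
    x += fx * dt / gamma_x + np.sqrt(2.0 * dt / (beta * gamma_x)) * rng.standard_normal()
    # slow extended variable z, evolved at the higher temperature 1/beta_bar
    fz = -kappa * (z - x)
    z += fz * dt / gamma_z + np.sqrt(2.0 * dt / (beta_bar * gamma_z)) * rng.standard_normal()
```

The spring couples z to the reaction coordinate (here x itself), and the higher temperature 1/beta_bar lets z cross barriers that the physical dynamics alone would rarely overcome.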

Sampling of out-of-equilibrium dynamics

Together with A. Iacobucci and S. Olla (Univ. Dauphine), G. Stoltz studied in [20] the convergence to the steady state of nonequilibrium Langevin dynamics, by a perturbative approach based on hypocoercive techniques developed for equilibrium Langevin dynamics. The Hamiltonian and overdamped limits (corresponding respectively to the friction going to zero or to infinity) were carefully investigated. In particular, the maximal magnitude of admissible perturbations is quantified as a function of the friction. Numerical results based on a Galerkin discretization of the generator of the dynamics confirmed the relevance of the theoretical lower bounds on the spectral gap.

J. Roussel and G. Stoltz have proven the consistency of the Galerkin method for hypocoercive operators in [52]. This method makes it possible to solve Poisson problems related to the Fokker-Planck equation very efficiently for low-dimensional systems, even when the dynamics is hypocoercive, as is the case for Langevin dynamics for example. J. Roussel and G. Stoltz showed in particular the exponential convergence of the semigroup associated with the projected generator and provided error estimates for the solution of the numerical method, under assumptions that are proven to hold for a toy model. The authors illustrated these results by numerical experiments. In addition, an ongoing work by J. Roussel and G. Stoltz focuses on the use of control variates for non-equilibrium systems. Whereas most variance reduction methods rely on the knowledge of the invariant probability measure, the latter is not explicit out of equilibrium. Control variates offer an attractive alternative in this framework. J. Roussel and G. Stoltz proposed a general strategy for constructing an efficient control variate, relying on physical simplifications of the dynamics. The authors provided an asymptotic analysis of the variance reduction in a perturbative framework, along with extensive numerical tests on three different systems.
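The mechanism behind such control variates can be illustrated on an equilibrium toy model where the relevant Poisson equation is solvable in closed form; the choices below are for illustration only, since out of equilibrium this exact solution is precisely what is unavailable:

```python
import numpy as np

rng = np.random.default_rng(2)
beta, dt, nsteps = 1.0, 1e-2, 100_000

# Overdamped Langevin in V(x) = x^2/2 (invariant measure N(0, 1/beta));
# observable phi(x) = x^2, whose exact stationary mean is 1/beta = 1.
x = 0.0
noise = rng.standard_normal(nsteps)
samples = np.empty(nsteps)
for n in range(nsteps):
    x += -x * dt + np.sqrt(2.0 * dt / beta) * noise[n]
    samples[n] = x

phi = samples ** 2
# With psi(x) = -x^2/2, the generator gives (L psi)(x) = x^2 - 1, which has
# zero mean under the invariant measure and is a valid control variate.
cv = phi - 1.0

plain = phi.mean()              # plain ergodic average, noisy
controlled = (phi - cv).mean()  # here the fluctuations cancel completely
```

In this contrived case the control variate removes all statistical variance; in practice one only has an approximate solution of the Poisson equation, and the variance is reduced rather than eliminated.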

G. Ferré is currently working on sampling problems and rare event estimates, in particular with nonequilibrium methods. During this year, he focused on a range of methods for the estimation of rare event probabilities, mostly based on Feynman-Kac semigroups. These processes correspond to stochastic differential equations whose trajectories are weighted, which is a form of importance sampling. This project resulted in a work on the discretization of such processes (error estimates on ergodic properties, with G. Stoltz), and led to the study of adaptive techniques, with H. Touchette (Stellenbosch). These two works will lead to publications in the near future. This research also raises questions on the long-time stability of Feynman-Kac semigroups, an issue only partially covered by the literature. G. Ferré is currently addressing this subject with G. Stoltz and M. Rousset (Inria Rennes). Other long-term projects are ongoing: one on exclusion processes with M. Simon (Inria Lille), and one on random matrices and Coulomb gases with D. Chafai (Dauphine).
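A minimal weighted-trajectory (Feynman-Kac) estimator can be sketched on a toy Brownian functional whose exact value is known; all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
T, dt, n_particles, lam = 1.0, 1e-3, 5000, 1.0
nsteps = int(T / dt)

# Weighted-particle estimate of E[exp(lam * int_0^T W_s ds)] for Brownian
# motion W; the integral is Gaussian with variance T^3/3, so the exact
# value is exp(lam^2 * T^3 / 6).
x = np.zeros(n_particles)
log_w = np.zeros(n_particles)
for _ in range(nsteps):
    x += np.sqrt(dt) * rng.standard_normal(n_particles)
    log_w += lam * x * dt            # accumulate the trajectory weight

estimate = np.mean(np.exp(log_w))
exact = np.exp(lam ** 2 * T ** 3 / 6)
```

The full Feynman-Kac particle methods mentioned above additionally resample particles according to their weights, which controls the weight degeneracy over long times.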

Sampling of dynamical properties and rare events

The sampling of dynamical properties along molecular dynamics trajectories is crucial to get access to important quantities such as transition rates or reactive paths. This is difficult numerically because of the metastability of trajectories. We are following two numerical approaches to sample metastable trajectories: the accelerated dynamics à la A.F. Voter and the adaptive multilevel splitting (AMS) technique to sample reactive paths between metastable states.

To analyze accelerated dynamics algorithms (and in particular the Temperature Accelerated Dynamics algorithm), one needs to show that the exit event from a metastable state for the Langevin or overdamped Langevin dynamics can be approximated by a kinetic Monte Carlo (kMC) model parameterized by the Eyring-Kramers laws. In [45], G. Di Gesu, T. Lelièvre and B. Nectoux, together with D. Le Peutrec (Université Paris-Saclay), used the quasi-stationary distribution approach to justify the use of kMC models parameterized by the Eyring-Kramers formulas to describe exit events from metastable states. The proof is based on tools from semi-classical analysis.
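Such a kMC model can be sketched in a few lines: given hypothetical prefactors and energy barriers for each escape pathway, the exit time is exponential with the total Eyring-Kramers rate, and the pathway is drawn proportionally to the individual rates (all numbers below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
beta = 2.0

# Hypothetical saddle-point data for one metastable state: prefactors nu_i
# and energy barriers dE_i for each escape pathway.
nu = np.array([1.0, 0.5, 2.0])
dE = np.array([1.0, 0.8, 1.5])

rates = nu * np.exp(-beta * dE)          # Eyring-Kramers rates
total = rates.sum()

def sample_exit_event(rng):
    """One kMC step: exponential exit time and choice of exit pathway."""
    t_exit = rng.exponential(1.0 / total)
    pathway = rng.choice(len(rates), p=rates / total)
    return t_exit, pathway

t, p = sample_exit_event(rng)
```

The result of [45] is precisely what justifies replacing the continuous exit event by this discrete model, with rates given by the Eyring-Kramers formulas.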

Concerning the AMS technique, two recent contributions showed the interest of this approach in different applicative fields. In [51], L. Silva Lopes and T. Lelièvre analyzed the performance of the AMS method for biological systems on a simple test case: the alanine dipeptide. The interest of the method was demonstrated on this simple example: it makes it possible to compute transition rates, to sample transition paths, and to compute reactive fluxes between two metastable states. In [26], T. Lelièvre, in collaboration with H. Louvin (CEA), E. Dumonteil (IRSN), M. Rousset (Inria Rennes) and C.M. Diop (CEA), implemented the AMS method in the framework of nuclear safety. The idea was to use the AMS method to compute neutron fluxes in strongly absorbing media, for shielding applications. The method has been implemented in Tripoli 4 and gives very interesting results compared to the classical exponential biasing approach, in particular for neutron branching processes.
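The flavor of the AMS estimator can be conveyed on a static toy problem, in its so-called last-particle variant, where resampling above the current level is exact because the Gaussian law is known; in the actual method this step is replaced by trajectory branching:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(5)
nd = NormalDist()
n, target = 100, 3.0     # number of particles, rare-event threshold

# Toy "last particle" AMS estimate of p = P(X > 3) for X ~ N(0,1).
# Each iteration kills the lowest particle and multiplies the estimate
# by (1 - 1/n), then resamples it above the killed level.
x = rng.standard_normal(n)
log_p = 0.0
while x.min() < target:
    level = x.min()
    log_p += np.log1p(-1.0 / n)
    u = rng.uniform(nd.cdf(level), 1.0)   # exact conditional resampling
    x[x.argmin()] = nd.inv_cdf(u)

p_hat = np.exp(log_p)
exact = 1.0 - nd.cdf(target)              # about 1.35e-3
```

The number of iterations grows only like n log(1/p), which is what makes splitting methods efficient for very small probabilities.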

Coarse-graining

In [25], F. Legoll and T. Lelièvre, in collaboration with S. Olla (Dauphine), analyzed the error introduced when deriving, via a projection approach à la Zwanzig, an effective dynamics on a few degrees of freedom for a stochastic process in large dimension. More precisely, a pathwise error estimate was obtained, which is an improvement over a previous result by F. Legoll and T. Lelièvre where only the time marginals were considered. This analysis is also useful to obtain quantitative estimates for averaging procedures on two-scale dynamics.

G. Stoltz developed new numerical methods to stabilize the time discretization of generalizations of Langevin dynamics, more precisely dissipative particle dynamics with energy conservation (DPDE) and smoothed dissipative particle dynamics (SDPD). The latter case was studied with a PhD student, Gérôme Faure (CEA/DAM and CERMICS). These two models describe mesoscopic systems of particles with two global invariants: energy and momentum. The numerical schemes are obtained as the composition of a Verlet integration of the deterministic part of the dynamics and successive integrations of the pairwise fluctuation-dissipation dynamics. These elementary dynamics are the ones which need to be stabilized, because too large timesteps can lead to negative internal energies of the particles. The idea of the methods is to rewrite the elementary 8-dimensional fluctuation-dissipation dynamics as effective reversible one-dimensional dynamics on the relative velocities, which can then be Metropolized; see [27] for DPDE and [18] for SDPD.
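The Metropolization idea can be sketched on a toy one-dimensional reversible dynamics: an Euler proposal is accepted or rejected so that the scheme exactly preserves a target measure, which rules out unstable moves. This is only an analogue of the schemes of [27], [18], with an illustrative Gaussian target and parameters:

```python
import numpy as np

rng = np.random.default_rng(6)
beta, dt, nsteps = 1.0, 0.5, 50_000

def log_target(v):
    # toy reversible target for the relative velocity: density ~ exp(-beta v^2/2)
    return -0.5 * beta * v * v

def log_q(a, b):
    # log density (up to constants) of the Euler proposal from a to b
    return -((b - a + beta * a * dt) ** 2) / (4.0 * dt)

v, n_accept = 0.0, 0
vals = np.empty(nsteps)
unif = rng.uniform(size=nsteps)
for n in range(nsteps):
    prop = v - beta * v * dt + np.sqrt(2.0 * dt) * rng.standard_normal()
    log_alpha = log_target(prop) - log_target(v) + log_q(prop, v) - log_q(v, prop)
    if unif[n] < np.exp(min(log_alpha, 0.0)):   # Metropolis accept/reject
        v, n_accept = prop, n_accept + 1
    vals[n] = v

acc_rate = n_accept / nsteps
```

Even with a large time step, the accept/reject correction keeps the invariant measure exact; rejected moves play the role of the unstable steps that would otherwise produce unphysical states.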

In [28], a joint work with Manuel Athènes, Thomas Jourdan (CEA/Saclay SRMP) and Gilles Adjanor (EDF R&D, MMC), G. Stoltz and P. Terrier presented a coupling algorithm for cluster dynamics. Rate equation cluster dynamics (RECD) is a mean field technique where only defect concentrations are considered. It consists in solving a large set of ODEs (one equation per cluster type) governing the evolution of the concentrations. Since clusters may contain up to millions of atoms or defects, the number of equations becomes very large, and solving such a system of ODEs becomes computationally prohibitive as the cluster sizes increase. Efficient deterministic simulations approximate the equations for large clusters by a single Fokker-Planck equation. Nevertheless, this approach is still limited by the number of equations to solve in the case of complex materials. Fully stochastic simulations view the RECD as a master equation, thus reducing the number of equations to solve to the number of stochastic particles, but they are limited by the high frequency of certain events. The proposed algorithm is based on a splitting of the dynamics and combines deterministic and stochastic approaches. It is generic (allowing for different stochastic approaches, such as a jump process or a Langevin dynamics based on the Fokker-Planck approximation) and is highly parallelizable. The accuracy of this new algorithm is illustrated on a case of vacancy clustering in materials under thermal ageing. Numerical analysis of the algorithm shows that the errors due to the splitting (a standard Lie-Trotter splitting) and to the stochastic approaches decrease according to the theory, i.e. respectively linearly with the time step and as N^{-1/2}, N being the number of stochastic particles. The error due to the Fokker-Planck approximation is currently under study.
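The first-order behavior of a Lie-Trotter splitting can be checked on a toy linear system with non-commuting parts; the matrices below are purely illustrative stand-ins for the deterministic and stochastic components of the dynamics:

```python
import numpy as np

def expm(M, terms=40):
    """Matrix exponential via truncated Taylor series (fine for small norms)."""
    out, term = np.eye(2), np.eye(2)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # toy "deterministic" part
B = np.array([[0.0, 0.0], [1.0, 0.0]])   # toy "stochastic" part (here linear)

def trotter_error(dt, T=1.0):
    n = round(T / dt)
    exact = expm((A + B) * T)
    step = expm(A * dt) @ expm(B * dt)    # one Lie-Trotter step
    approx = np.linalg.matrix_power(step, n)
    return np.linalg.norm(approx - exact)

e1, e2 = trotter_error(0.1), trotter_error(0.05)
```

Halving the time step roughly halves the splitting error, consistent with the linear-in-time-step convergence mentioned above (the N^{-1/2} statistical error is a separate, Monte Carlo contribution).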